Marta Marchiori Manerba

Involved in research lines 1 ▪ 2 ▪ 5

Role: PhD Student

Affiliation: University of Pisa


Marta Marchiori Manerba (she/her) is a Graduate Student in Digital Humanities at the University of Pisa and currently a Ph.D. student in AI. During her studies, she has explored the relationship between technology and human rights. She works on Fairness and Explainability in Natural Language Processing, focusing on digital discrimination and algorithmic biases. She holds a Bachelor's Degree in Digital Humanities from the University of Pisa, during which she developed a strong interest in the detection of hate speech targeting minorities in online discourse.


[MM2023]
Exposing Racial Dialect Bias in Abusive Language Detection: Can Explainability Play a Role?
Marchiori Manerba Marta, Morini Virginia (2023) - In Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2022, Communications in Computer and Information Science, vol 1752.

Abstract

Biases can arise and be introduced during each phase of a supervised learning pipeline, eventually leading to harm. Within the task of automatic abusive language detection, this matter becomes particularly severe, since unintended bias towards sensitive topics such as gender, sexual orientation, or ethnicity can harm underrepresented groups. The role of the datasets used to train these models is crucial in addressing these challenges. In this contribution, we investigate whether explainability methods can expose racial dialect bias attested within a popular dataset for abusive language detection. Through preliminary experiments, we found that pure explainability techniques cannot effectively uncover biases within the dataset under analysis: the stereotypes rooted in the data are often implicit and difficult to retrieve.
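For readers unfamiliar with this kind of analysis, the sketch below illustrates one way explanation scores could be aggregated to look for dialect-related bias: it averages token-level attribution scores over two subsets of a corpus and ranks the tokens whose average importance differs most between them. The `explain_fn` callback and the corpus splits are hypothetical placeholders, not the pipeline used in the paper.

```python
from collections import defaultdict
from statistics import mean

def aggregate_attributions(documents, explain_fn):
    """Average per-token attribution scores over a collection of documents.

    `explain_fn` is a placeholder for any local explainer that returns
    (token, score) pairs for a single text, e.g. a LIME or SHAP wrapper.
    """
    scores = defaultdict(list)
    for text in documents:
        for token, score in explain_fn(text):
            scores[token.lower()].append(score)
    return {tok: mean(vals) for tok, vals in scores.items()}

def compare_subgroups(docs_a, docs_b, explain_fn, top_k=20):
    """Rank tokens by the gap in average attributed importance between
    two subsets of the corpus (e.g. texts in different dialects)."""
    avg_a = aggregate_attributions(docs_a, explain_fn)
    avg_b = aggregate_attributions(docs_b, explain_fn)
    shared = set(avg_a) & set(avg_b)
    gaps = {tok: avg_a[tok] - avg_b[tok] for tok in shared}
    return sorted(gaps.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
```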


[MG2022]
Investigating Debiasing Effects on Classification and Explainability
Marchiori Manerba Marta, Guidotti Riccardo (2022) - In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES '22).

Abstract

During each stage of a dataset's creation and development, harmful biases can be accidentally introduced, leading to models that perpetuate the marginalization and discrimination of minorities, since the role of the data used during training is critical. We propose an evaluation framework that investigates the impact on classification and explainability of bias-mitigation preprocessing techniques used to assess data imbalances concerning minorities' representativeness and to mitigate the skewed distributions discovered. Our evaluation focuses on assessing fairness, explainability, and performance metrics. We analyze the behavior of local model-agnostic explainers on the original and mitigated datasets to examine whether the proxy models learned by the explainability techniques to mimic the black boxes disproportionately rely on sensitive attributes, revealing biases rooted in the explainers. We conduct several experiments on known biased datasets to demonstrate our proposal's novelty and effectiveness for evaluation and bias-detection purposes.


Research Line 1 ▪ 5
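As a rough illustration of the before/after comparison that such a framework performs, the sketch below trains a classifier on synthetic data, measures a demographic-parity gap with respect to a sensitive attribute, applies a naive oversampling of underrepresented (group, label) combinations as a stand-in for a preprocessing mitigation, and recomputes the gap. The data, the mitigation step, and the metric choice are simplified assumptions, not the paper's actual experimental setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between groups."""
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

def oversample_minority_cells(X, y, sensitive, rng):
    """Naive preprocessing mitigation: resample every (group, label) cell
    up to the size of the largest cell so the training data is balanced."""
    cells = [(g, c) for g in np.unique(sensitive) for c in np.unique(y)]
    idx_per_cell = [np.where((sensitive == g) & (y == c))[0] for g, c in cells]
    target = max(len(idx) for idx in idx_per_cell)
    resampled = np.concatenate(
        [rng.choice(idx, size=target, replace=True) for idx in idx_per_cell]
    )
    return X[resampled], y[resampled], sensitive[resampled]

rng = np.random.default_rng(0)
# Synthetic data: one binary sensitive attribute correlated with the label.
n = 2000
sensitive = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5)) + sensitive[:, None] * 0.5
y = (X[:, 0] + 0.8 * sensitive + rng.normal(scale=0.5, size=n) > 0.7).astype(int)

clf = LogisticRegression().fit(X, y)
print("gap before mitigation:", demographic_parity_gap(clf.predict(X), sensitive))

X_m, y_m, s_m = oversample_minority_cells(X, y, sensitive, rng)
clf_m = LogisticRegression().fit(X_m, y_m)
print("gap after mitigation: ", demographic_parity_gap(clf_m.predict(X), sensitive))
```

In the paper's framework the same kind of comparison is extended to explainability metrics as well, by checking how much the explainers' proxy models rely on the sensitive attributes before and after mitigation.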


[MG2021]
FairShades: Fairness Auditing via Explainability in Abusive Language Detection Systems
Marchiori Manerba Marta, Guidotti Riccardo (2021) - In 2021 IEEE Third International Conference on Cognitive Machine Intelligence (CogMI).

Abstract

At every stage of a supervised learning process, harmful biases can arise and be inadvertently introduced, ultimately leading to marginalization, discrimination, and abuse towards minorities. This phenomenon becomes particularly impactful in the sensitive real-world context of abusive language detection systems, where non-discrimination is difficult to assess. In addition, given the opaqueness of their internal behavior, the dynamics leading a model to a certain decision are often neither clear nor accountable, and significant problems of trust can emerge. A robust, value-oriented evaluation of models' fairness is therefore necessary. In this paper, we present FairShades, a model-agnostic approach for auditing the outcomes of abusive language detection systems. Combining explainability and fairness evaluation, FairShades can identify unintended biases and the sensitive categories towards which models are most discriminative. This objective is pursued through the auditing of meaningful counterfactuals generated within the CheckList framework. We conduct several experiments on BERT-based models to demonstrate our proposal's novelty and effectiveness for unmasking biases.



Research Line 1 ▪ 5
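FairShades itself relies on counterfactuals generated within the CheckList framework; the fragment below is only a minimal sketch of the underlying intuition, using plain template substitution instead of CheckList and a hypothetical `predict_fn` standing in for the audited black-box model. It measures, per protected category, how much the predicted abusiveness score shifts when only the identity term changes.

```python
# Hypothetical identity terms grouped by protected category (illustrative only).
IDENTITY_TERMS = {
    "religion": ["christian", "muslim", "jewish"],
    "sexual_orientation": ["straight", "gay", "lesbian"],
    "ethnicity": ["white", "black", "asian"],
}

TEMPLATES = [
    "I am a {term} person.",
    "My neighbours are {term}.",
]

def audit(predict_fn, templates=TEMPLATES, terms=IDENTITY_TERMS):
    """Measure, per protected category, the spread of the classifier's
    abusive-class probability across counterfactual substitutions.

    `predict_fn(text)` is a placeholder for any black-box model returning
    the probability that `text` is abusive.
    """
    report = {}
    for category, words in terms.items():
        spreads = []
        for template in templates:
            scores = [predict_fn(template.format(term=w)) for w in words]
            spreads.append(max(scores) - min(scores))
        report[category] = sum(spreads) / len(spreads)
    # Categories with the largest average spread are the most suspicious.
    return dict(sorted(report.items(), key=lambda kv: kv[1], reverse=True))
```

A large spread for a category suggests that swapping the identity term alone changes the model's output, which is precisely the kind of unintended bias such an audit is meant to surface.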